# Long-Text Processing
## Qwen3 14B Unsloth Bnb 4bit
Apache-2.0
Qwen3 is the latest generation of large language models in the Tongyi Qianwen (Qwen) series, offering both dense and mixture-of-experts (MoE) models. Through large-scale training, Qwen3 achieves breakthrough progress in reasoning, instruction following, agent capabilities, and multilingual support.
Large Language Model · Transformers · English
unsloth · 68.67k · 5
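This entry is a pre-quantized 4-bit bitsandbytes (bnb) build, so it can be loaded like any other Transformers checkpoint with the quantization settings already baked in. The sketch below assumes the repo id unsloth/Qwen3-14B-unsloth-bnb-4bit (inferred from the listing name) and a machine with a CUDA GPU and the bitsandbytes package installed.

```python
# Minimal sketch: loading a pre-quantized bitsandbytes 4-bit checkpoint with
# Hugging Face Transformers. The repo id is inferred from the listing and may
# need to be adjusted to the actual repository.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "unsloth/Qwen3-14B-unsloth-bnb-4bit"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",   # spread layers across available GPU(s)
    torch_dtype="auto",  # keep the dtypes stored in the checkpoint
)

prompt = "Summarize the advantages of 4-bit quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```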
## Qwen3 14B GGUF
Apache-2.0
Qwen3 is the latest large language model developed by Alibaba Cloud, featuring strong reasoning, instruction-following, and multilingual capabilities, along with the ability to switch between thinking and non-thinking modes.
Large Language Model · English
unsloth · 81.29k · 40
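Because this build ships as GGUF files and supports switching between thinking and non-thinking modes, a local runner such as llama-cpp-python is a natural fit. The sketch below assumes a downloaded file named Qwen3-14B-Q4_K_M.gguf (hypothetical; use whichever quantization you fetched) and relies on Qwen3's "/no_think" soft switch to suppress the thinking block.

```python
# Hedged sketch: running a Qwen3 GGUF file with llama-cpp-python and toggling
# non-thinking mode via the "/no_think" soft switch in the user message.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3-14B-Q4_K_M.gguf",  # assumed local GGUF file name
    n_ctx=8192,                          # context window for longer inputs
)

messages = [
    {"role": "user", "content": "List three uses of GGUF model files. /no_think"}
]
result = llm.create_chat_completion(messages=messages, max_tokens=128)
print(result["choices"][0]["message"]["content"])
```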
## Trillion 7B Preview
Apache-2.0
Trillion-7B Preview is a multilingual large language model supporting English, Korean, Japanese, and Chinese. It delivers performance competitive with models trained at substantially higher compute while keeping its own computational requirements low.
Large Language Model · Transformers · Multiple Languages
trillionlabs · 6,864 · 82
## Gemma 3 12b It GGUF
Gemma-3-12b-it is a large language model developed by Google, based on the transformer architecture, focusing on text generation tasks.
Large Language Model
second-state · 583 · 1
## Gemma 7b Aps It
Gemma-APS is a generative model for Abstractive Proposition Segmentation (APS): it breaks a text passage down into independent facts, statements, and opinions.
Large Language Model · Transformers
google · 161 · 33
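As a rough illustration of proposition segmentation, the sketch below asks the model to split a short passage into standalone propositions through the Transformers text-generation pipeline. The repo id google/gemma-7b-aps-it is inferred from the listing and the plain-English instruction is an assumption; the model card defines the exact input format the model was trained on, which should be preferred.

```python
# Hedged sketch: prompting an APS-style model to split a passage into atomic
# propositions. Repo id and instruction wording are assumptions; follow the
# model card's prompt format for real use.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="google/gemma-7b-aps-it",  # assumed repo id from the listing name
    device_map="auto",
)

passage = (
    "The Amazon is the largest rainforest on Earth. Some scientists argue "
    "that its loss would accelerate climate change."
)
messages = [
    {
        "role": "user",
        "content": "Split the following passage into independent propositions:\n" + passage,
    }
]
result = pipe(messages, max_new_tokens=128)
print(result[0]["generated_text"][-1]["content"])  # assistant reply
```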
## Jais Family 2p7b
Apache-2.0
The Jais family is a series of bilingual large language models optimized for Arabic and English, with variants ranging from 590 million to 70 billion parameters.
Large Language Model · Multiple Languages
inceptionai · 174 · 5
## Midnight Miqu 70B V1.0
Other
A 70B-parameter large language model created with a SLERP merge of miqu-1-70b-sf and Midnight-Rose-70B, combining the strengths of both and optimized for role-playing and story writing.
Large Language Model · Transformers
sophosympatheia · 393 · 62
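For readers unfamiliar with SLERP, the sketch below shows the underlying operation, spherical linear interpolation between two like-shaped weight tensors. It is illustrative only, not the actual recipe used to build Midnight-Miqu; the blend ratio t=0.6 and the tensor shapes are arbitrary.

```python
# Illustrative SLERP (spherical linear interpolation) between two weight
# tensors, the basic operation behind SLERP-based model merges.
import torch

def slerp(w_a: torch.Tensor, w_b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate two tensors of identical shape; t in [0, 1]."""
    a, b = w_a.flatten().float(), w_b.flatten().float()
    a_dir, b_dir = a / (a.norm() + eps), b / (b.norm() + eps)
    # Angle between the two weight directions.
    omega = torch.acos(torch.clamp(torch.dot(a_dir, b_dir), -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:
        # Nearly parallel weights: fall back to plain linear interpolation.
        blended = (1.0 - t) * a + t * b
    else:
        blended = (torch.sin((1.0 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
    return blended.reshape(w_a.shape).to(w_a.dtype)

# Example: blend two random like-shaped tensors, weighted 60% toward the second.
merged = slerp(torch.randn(1024, 1024), torch.randn(1024, 1024), t=0.6)
print(merged.shape)
```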
## OPEN SOLAR KO 10.7B GGUF
Apache-2.0
A GGUF-format quantized version of the beomi/OPEN-SOLAR-KO-10.7B model, available at 2- to 8-bit quantization levels and suited to Korean and English text generation.
Large Language Model · Multiple Languages
MaziyarPanahi · 86 · 1
## Mgpt 1.3B Yakut
MIT
A 1.3-billion-parameter Yakut language model fine-tuned from mGPT-XL (1.3B), supporting Yakut text generation.
Large Language Model · Transformers · Multiple Languages
ai-forever · 31 · 6
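A minimal usage sketch through the Transformers text-generation pipeline follows. The repo id ai-forever/mGPT-1.3B-yakut and the Yakut-language prompt are assumptions based on the listing, so check the model page for the exact repository name.

```python
# Hedged sketch: generating Yakut text with the fine-tuned mGPT checkpoint.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="ai-forever/mGPT-1.3B-yakut",  # assumed repo id
)

# Prompt meaning roughly "the Sakha land" (Yakutia); any Yakut text works here.
print(generator("Саха сирэ", max_new_tokens=40)[0]["generated_text"])
```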